
run pytest against nightly #238


Merged
merged 4 commits into from Sep 3, 2022

Conversation

twoertwein
Member

Not sure whether it will succeed if pytest fails.

@twoertwein
Member Author

Might be good to file an issue with pyarrow to handle pandas nightly

ImportError: pyarrow requires pandas 0.23.0 or above, pandas 0+unknown is installed

@twoertwein
Member Author

Might be good to file an issue with pyarrow to handle pandas nightly

ImportError: pyarrow requires pandas 0.23.0 or above, pandas 0+unknown is installed

I guess this is only an issue on older Pythons.

@twoertwein twoertwein marked this pull request as draft August 31, 2022 02:41
@Dr-Irv
Collaborator

Dr-Irv commented Aug 31, 2022

Looking at this, I don't see where you install the nightly build, but maybe I'm missing something or you're still working on that part??

@twoertwein
Member Author

Looking at this, I don't see where you install the nightly build, but maybe I'm missing something or you're still working on that part??

This installs (and uninstalls) the nightly:

steps = [_step.nightly] if nightly else []

The issue is that I don't yet know how to make GitHub ignore errors.
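For reference, GitHub Actions can tolerate a failing step via `continue-on-error` — a minimal sketch (job and step names are hypothetical; pandas-stubs generates its workflow from Python, so the real file may look different):

```yaml
# Hypothetical workflow fragment: the nightly pytest step can fail
# without failing the job (the job still shows green, matching the
# behavior discussed below).
jobs:
  nightly:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run pytest against pandas nightly
        continue-on-error: true  # job passes even if this step fails
        run: pytest tests
```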

@bashtage
Contributor

Add "--upgrade", "--use-deprecated=legacy-resolver" to the pip command that installs the nightly.

@Dr-Irv
Collaborator

Dr-Irv commented Aug 31, 2022

Looking at this, I don't see where you install the nightly build, but maybe I'm missing something or you're still working on that part??

This install (and uninstalls nightly):

steps = [_step.nightly] if nightly else []

I think it was here, which I didn't remember was in the code:

def nightly_pandas():

@twoertwein
Member Author

Add "--upgrade","--use-deprecated=legacy-resolver" to the pip command that installs the nightly.

Thank you! I simplified the install/uninstall. Why is --use-deprecated=legacy-resolver needed?

@bashtage
Contributor

It has to do with how pip checks versions. The legacy resolver takes the latest. Without it, pip downloads every wheel file to read its metadata before deciding on the best fit.
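Applied to a CI step, the suggestion might look like the fragment below; the extra index URL is an assumption (the scientific-python nightly wheel channel in use around 2022), not taken from this PR:

```yaml
- name: Install pandas nightly
  run: |
    pip install --upgrade --pre --use-deprecated=legacy-resolver \
      --extra-index-url https://pypi.anaconda.org/scipy-wheels-nightly/simple \
      pandas
```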

@twoertwein twoertwein marked this pull request as ready for review August 31, 2022 21:13
@twoertwein
Member Author

I'm surprised we get these warnings:

/home/runner/work/pandas-stubs/pandas-stubs/tests/test_frame.py:290: FutureWarning: The 'inplace' keyword in DataFrame.set_index is deprecated and will be removed in a future version. Use df = df.set_index(..., copy=False) instead.
res6: None = df.set_index("col1", inplace=True)

I thought that change had already been reverted on main.

@bashtage bashtage closed this Sep 1, 2022
@bashtage bashtage reopened this Sep 1, 2022
@bashtage
Contributor

bashtage commented Sep 1, 2022

Reopen to trigger CI

Collaborator

@Dr-Irv Dr-Irv left a comment


So while we don't exit on failure if the test against nightly builds fails, if there was a failure, would it show up red in the CI report?

@twoertwein
Member Author

So while we don't exit on failure if the test against nightly builds fails, if there was a failure, would it show up red in the CI report?

It will always be "green". As far as I know there is no "yellow".

@twoertwein
Member Author

There might be a way to mark the CI run as skipped if it fails. Then we would have a visual indicator.

@Dr-Irv
Collaborator

Dr-Irv commented Sep 2, 2022

There might be a way to mark the CI run as skipped if it fails. Then we would have a visual indicator.

Without any visual indicator, I'm not sure how we would ever notice that something was revealed via this test. Can you investigate that?

@twoertwein
Member Author

I didn't find a way to visually indicate an error except by failing (it fails now).

I also made pytest error on any warning. This currently fails on nightly, but #251 might fix that.
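Turning warnings into test failures can be done through pytest configuration — a minimal sketch in `pyproject.toml` (pandas-stubs may instead pass `-W error` on the command line; this fragment is illustrative):

```toml
[tool.pytest.ini_options]
filterwarnings = ["error"]
```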

@Dr-Irv
Collaborator

Dr-Irv commented Sep 2, 2022

So I like what the PR is doing, but it raises some questions about how we want to manage the project. Opinions wanted!

  1. If we see a failure in the job due to the nightly build, should we expect users to fix the code before accepting the PR?
  2. If the answer to (1) is yes, then it's easy to manage.
  3. If the answer to (1) is no, then that brings up a couple of questions, (4) and (5).
  4. Should we create a separate job that only tests the nightly build, and then accept PR's where all the other tests pass?
  5. Should we manage the project by having a "main_release" branch, corresponding to what we release weekly as we improve the stubs for the released version of pandas, and a "nightly" branch, corresponding to what is developed to work for a future version of pandas? (Would have to decide which one of those is main)

@twoertwein
Member Author

We could also make pytest-nightly run in its own workflow; then it is clear that ubuntu+3.10 did not fail for any other reason.

@twoertwein
Member Author

1. If we see a failure in the job due to the nightly build, should we expect users to fix the code before accepting the PR?

If pandas nightly causes all new PRs to have errors, I would not expect people to fix that as part of their PR. But if people want to add new annotations that trigger future/deprecation warnings, I would be inclined to request changes :)

4. Should we create a separate job that only tests the nightly build, and then accept PR's where all the other tests pass?

In either case, it probably makes sense to have a separate workflow just for nightly.

5. Should we manage the project by having a "main_release" branch, corresponding to what we release weekly as we improve the stubs for the released version of pandas, and a "nightly" branch, corresponding to what is developed to work for a future version of pandas?  (Would have to decide which one of those is `main`)

I really like the current simplicity of pandas-stubs. If there are cases where we do not (yet) want to be in line with nightly, we could simply use a pytest skip decorator based on the pandas version.
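A version-gated skip could look like the sketch below. The helper, the test name, and the version cutoff are all hypothetical, and real code would likely use `packaging.version` rather than hand parsing; nightly builds carry versions like `1.6.0.dev0+100.gabc`, which this handles:

```python
import re


def before(version: str, major: int, minor: int) -> bool:
    """True if `version` (e.g. "1.4.3" or "1.6.0.dev0+100.gabc")
    is older than (major, minor)."""
    m = re.match(r"(\d+)\.(\d+)", version)
    assert m is not None, f"unparsable version: {version!r}"
    return (int(m.group(1)), int(m.group(2))) < (major, minor)


# In a test module one would then write (assuming pandas and pytest):
#
# @pytest.mark.skipif(not before(pd.__version__, 2, 0),
#                     reason="not yet compatible with pandas nightly")
# def test_legacy_behavior() -> None: ...
```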

@Dr-Irv
Collaborator

Dr-Irv commented Sep 2, 2022

In either case, it probably makes sense to have a separate workflow just for nightly.

So let's do that, and we'll operate on the principle that if the nightly fails, and everything else passes, then we'll accept the PR.

I really like the current simplicity of pandas-stubs.

Me too!!!

If there are cases where we do not (yet) want to be in-line with nightly, we could simply have a pytest skip decorator based on the pandas version.

Good idea

@twoertwein
Member Author

I will mark this PR as draft and then debug sharing steps between the general checks and pytest nightly.

@twoertwein twoertwein changed the title optional pytest run against nightly run pytest against nightly Sep 2, 2022
@twoertwein twoertwein marked this pull request as ready for review September 2, 2022 20:08
@twoertwein twoertwein requested a review from Dr-Irv September 2, 2022 20:09
@twoertwein
Member Author

Might be worth waiting for #251 to keep the nice green check mark :)

@bashtage
Contributor

bashtage commented Sep 3, 2022

Can you make test/nightly orange for failure rather than red?

@twoertwein
Member Author

Can you make test/nightly orange for failure rather than red?

I did not find an option for that.

@bashtage
Contributor

bashtage commented Sep 3, 2022

statsmodels has this on its pip-pre run when pytest fails. If you merge this, I'll see if I can figure out what is needed in a later PR.

Collaborator

@Dr-Irv Dr-Irv left a comment


thanks @twoertwein

@Dr-Irv Dr-Irv merged commit b072842 into pandas-dev:main Sep 3, 2022
@Dr-Irv
Collaborator

Dr-Irv commented Sep 3, 2022

I think there is a way to mark a run as "allowed to fail". At least I remember seeing it somewhere.

Hopefully @bashtage can figure it out!

@twoertwein
Member Author

If it is not possible to make it warn without failing, an option could be to run it only on main (after PRs are merged), or the reverse (only for PRs but not on main): then we would have some more green :)

@bashtage
Contributor

bashtage commented Sep 5, 2022

If it is not possible to make it warn without failing, an option could be to run it only on main (after PRs are merged), or the reverse (only for PRs but not on main): then we would have some more green :)

Looks pretty hopeless for now. It seems GHA doesn't have a yellow/warn mode despite the similarities to Azure (though I suspect it runs on Azure).

@bashtage
Contributor

bashtage commented Sep 5, 2022

I would always let the run pass and keep it in CI so that interested parties can easily see what is going on.

The alternative would be to add one Azure run, which could be configured to show yellow. If you want the Azure run, I can set it up.

@twoertwein
Member Author

Is there a way to customize the "failing/successful after n seconds" message:

[screenshot: CI check run status message]

We could then let it always succeed but at least change the message to failed :)

Development

Successfully merging this pull request may close these issues.

TST: Add an unfailable CI run against pandas main
3 participants